Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method

Authors

Abstract

Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers. We do so by employing monotone operator theory, in particular the Forward-Backward-Forward (FBF) method, which avoids the known issue of limit cycling by correcting each update with a second gradient evaluation. Furthermore, we propose a seemingly new scheme which recycles old gradients to mitigate the additional computational cost. In doing so we rediscover a known method, related to Optimistic Gradient Descent Ascent (OGDA). For both schemes we prove novel convergence rates for convex-concave minimax problems via a unifying approach. The derived error bounds are in terms of the gap function for the ergodic iterates. For the deterministic and the stochastic problem we show a convergence rate of O(1/k) and O(1/√k), respectively. We complement our theoretical results with empirical improvements in the training of Wasserstein GANs on the CIFAR10 dataset.
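As a concrete illustration of the two update rules mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of Tseng's FBF step and of the gradient-recycling, OGDA-style step on a toy bilinear saddle-point problem min_x max_y x^T A y. With no nonsmooth regularizer the backward (proximal) step is the identity; the matrix A, the step size lam, and the iteration counts are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))            # almost surely invertible, so the unique saddle point is the origin
lam = 0.25 / np.linalg.norm(A, 2)          # step size below 1/L, with L the Lipschitz constant of F

def F(z):
    # Monotone operator of min_x max_y x^T A y:  F(x, y) = (A y, -A^T x).
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

z0 = rng.standard_normal(2 * n)

# Tseng's FBF: two operator evaluations per iteration.
z = z0.copy()
for _ in range(10000):
    Fz = F(z)
    y_bar = z - lam * Fz                   # forward step (a prox would follow here if a regularizer were present)
    z = y_bar - lam * (F(y_bar) - Fz)      # second forward step corrects the update

print("FBF  distance to the saddle point at 0:", np.linalg.norm(z))

# Gradient recycling (OGDA-style): one fresh evaluation per iteration, the previous one is reused.
z, F_prev = z0.copy(), F(z0)
for _ in range(10000):
    F_cur = F(z)
    z = z - lam * (2.0 * F_cur - F_prev)   # extrapolate with the recycled gradient
    F_prev = F_cur

print("OGDA distance to the saddle point at 0:", np.linalg.norm(z))

Both schemes drive the iterates toward the saddle point at the origin, whereas plain simultaneous gradient descent-ascent with a constant step size fails to converge on this problem; that is the limit cycling the correction step (or the recycled gradient) is meant to avoid.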


Related articles

Two examples of discrete-time quantum walks taking continuous steps

This note introduces some examples of quantum random walks in R and proves the weak convergence of their rescaled n-step densities. One of the examples is called the Plancherel quantum walk because the “quantum coin flip” is the Fourier Integral (or Plancherel) Transform. The other examples are the Birkhoff quantum walks, so named because the coin flips are effected by means of measure preservi...

A discrete-time quantum walk taking continuous steps

This note introduces a quantum random walk in R and proves the weak convergence of its rescaled n-step densities. Quantum walks of the type we consider in this note were introduced in [1], which defined and analyzed the Hadamard quantum walk on Z, and a “new type of convergence theorem” for such quantum walks on Z was discovered by Konno [4, 5]. A much simpler proof of Konno’s theorem has recen...

Taking positive steps.

In antiquity the Oracle at Delphi urged each to "know thyself." Socrates followed with the observation that "The unexamined life is not worth living." Aristotle called for a balance in creating the "good life" centering on the "golden mean." In the second century A.D. Marcus Aurelius, emperor of the Roman empire (the closest the western world may have ever come to a philosopher king), reminded ...

Training Triplet Networks with GAN

Triplet networks are widely used models that are characterized by good performance in classification and retrieval tasks. In this work we propose to train a triplet network by putting it as the discriminator in Generative Adversarial Nets (GANs). We make use of the good capability of representation learning of the discriminator to increase the predictive quality of the model. We evaluated our a...

Transport of Pollutant in Shallow Water: A Two Time Steps Kinetic Method

The aim of this paper is to present a finite volume kinetic method to compute the transport of a passive pollutant by a flow modeled by the shallow water equations, using a new time discretization that allows large time steps for the pollutant computation. For the hydrodynamic part the kinetic solver ensures – even in the case of a non-flat bottom – the preservation of the steady state of a lake...


Journal

Journal title: SIAM Journal on Mathematics of Data Science

Year: 2022

ISSN: 2577-0187

DOI: https://doi.org/10.1137/21m1420939